Efficient and sparse neural networks by pruning weights in a multiobjective learning approach
Authors
Abstract
Overparameterization and overfitting are common concerns when designing and training deep neural networks, and they are often counteracted by pruning and regularization strategies. However, these strategies remain secondary to most learning approaches and suffer from time- and computation-intensive procedures. We suggest a multiobjective perspective on the training of neural networks by treating its prediction accuracy and the network complexity as two individual objective functions in a biobjective optimization problem. As a showcase example, we use the cross entropy as a measure of the prediction accuracy while adopting an l1-penalty function to assess the total cost (or complexity) of the network parameters. The latter is combined with an intra-training pruning approach that reinforces complexity reduction and requires only marginal extra computational cost. From the perspective of multiobjective optimization, this is a truly large-scale optimization problem. We compare two different optimization paradigms: On the one hand, we adopt a scalarization-based approach that transforms the biobjective problem into a series of weighted-sum scalarizations. On the other hand, we implement stochastic multi-gradient descent algorithms that generate a single Pareto optimal solution without requiring or using preference information. In the first case, favorable knee solutions are identified by repeated runs with adaptively selected scalarization parameters. Numerical results on exemplary convolutional neural networks confirm that large reductions in network complexity with a negligible loss of accuracy are possible.
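To illustrate the scalarization paradigm described in the abstract, the following is a minimal sketch of one weighted-sum training step, assuming a PyTorch model and optimizer; the function name scalarized_training_step, the scalarization weight lam, and the pruning threshold tau are illustrative choices, not values or names from the paper.

import torch
import torch.nn.functional as F

def scalarized_training_step(model, batch, optimizer, lam=1e-4, tau=1e-3):
    # One SGD step on the weighted sum of the two objectives:
    # cross entropy (prediction accuracy) + lam * l1 norm (network complexity).
    inputs, targets = batch
    optimizer.zero_grad()
    logits = model(inputs)
    ce = F.cross_entropy(logits, targets)                 # objective 1: accuracy
    l1 = sum(p.abs().sum() for p in model.parameters())   # objective 2: complexity
    loss = ce + lam * l1                                  # weighted-sum scalarization
    loss.backward()
    optimizer.step()
    # Intra-training pruning: zero out weights whose magnitude the l1 term
    # has driven below tau; this adds only marginal extra cost per step.
    with torch.no_grad():
        for p in model.parameters():
            p.masked_fill_(p.abs() < tau, 0.0)
    return ce.item(), l1.item()

Repeating this training loop for several values of lam, as the abstract suggests, traces out candidate points of the Pareto front from which knee solutions can be selected.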
Similar resources
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude w...
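A hedged sketch of the one-shot magnitude pruning this line of work builds on, assuming a trained PyTorch model; the helper name prune_by_magnitude and the sparsity level are hypothetical, and in practice the pruned model is retrained to recover accuracy.

import torch

def prune_by_magnitude(model, sparsity=0.9):
    # Collect the magnitudes of all parameters and find the k-th smallest,
    # so that a `sparsity` fraction of connections falls below the threshold.
    weights = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = max(1, int(sparsity * weights.numel()))
    threshold = weights.kthvalue(k).values
    # Zero out every connection whose magnitude is at or below the threshold.
    with torch.no_grad():
        for p in model.parameters():
            p.masked_fill_(p.abs() <= threshold, 0.0)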
Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning
We propose a new framework for pruning convolutional kernels in neural networks to enable efficient inference, focusing on transfer learning where large and potentially unwieldy pretrained networks are adapted to specialized tasks. We interleave greedy criteria-based pruning with fine-tuning by backpropagation—a computationally efficient procedure that maintains good generalization in the prune...
Reinforcement Learning in Neural Networks: A Survey
In recent years, research on reinforcement learning (RL) has focused on bridging the gap between adaptive optimal control and bio-inspired learning techniques. Neural network reinforcement learning (NNRL) is among the most popular algorithms in the RL framework. Using neural networks enables RL to search for optimal policies more efficiently in several real-life applicat...
Sparse connection and pruning in large dynamic artificial neural networks
This paper presents new methods for training large neural networks for phoneme probability estimation. A combination of the time-delay architecture and the recurrent network architecture is used to capture the important dynamic information of the speech signal. Motivated by the fact that the number of connections in fully connected recurrent networks grows super-linearly with the number of hidden...
Journal
Journal title: Computers & Operations Research
Year: 2022
ISSN: 0305-0548, 1873-765X
DOI: https://doi.org/10.1016/j.cor.2021.105676